Linguistic Bias in Collaboratively Produced Biographies: Crowdsourcing Social Stereotypes?

Author

  • Jahna Otterbacher
Abstract

Language is the primary medium through which stereotypes are conveyed. Even when we avoid using derogatory language, there are many subtle ways in which stereotypes are created and reinforced, and they often go unnoticed. Linguistic bias, the systematic asymmetry in language patterns as a function of the social group of the persons described, may play a key role. We ground our study in the social psychology literature on linguistic biases, and consider two ways in which biases might manifest: through the use of more abstract versus concrete language, and through the use of subjective words. We analyze biographies of African American and Caucasian actors at the Internet Movie Database (IMDb), hypothesizing that language patterns vary as a function of race and gender. We find that both attributes are correlated with the use of abstract, subjective language. Theory predicts that we describe people and scenes that are expected, as well as positive aspects of our in-group members, with more abstract language. Indeed, white actors are described with more abstract, subjective language at IMDb, as compared to other social groups. Abstract language is powerful because it implies stability over time; studies have shown that people have better impressions of others described in abstract terms. Therefore, the widespread prevalence of linguistic biases in social media stands to reinforce social stereotypes. Further work should consider the technical and social characteristics of the collaborative writing process that lead to an increase or decrease in linguistic biases.
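The abstract describes two measurements, the abstractness of descriptions and the use of subjective words, without detailing an implementation. The minimal Python sketch below shows one plausible way to approximate both for a single biography text; the adjective-share proxy for abstraction (adjectives being the most abstract category in the Linguistic Category Model) and the TextBlob subjectivity lexicon are illustrative assumptions, not the paper's actual pipeline.

# Illustrative sketch, not the paper's method: score a biography for
# (a) a rough abstraction proxy, the share of adjectives among content words, and
# (b) subjectivity, via TextBlob's lexicon-based subjectivity score.
# Assumes nltk resources (punkt, averaged_perceptron_tagger) have been downloaded.
import nltk
from textblob import TextBlob

def abstraction_proxy(text: str) -> float:
    """Fraction of content words (nouns, verbs, adjectives, adverbs) tagged as adjectives."""
    tokens = nltk.word_tokenize(text)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    content = [t for t in tags if t.startswith(("NN", "VB", "JJ", "RB"))]
    adjectives = [t for t in tags if t.startswith("JJ")]
    return len(adjectives) / len(content) if content else 0.0

def subjectivity(text: str) -> float:
    """TextBlob subjectivity in [0, 1]; higher values indicate more subjective wording."""
    return TextBlob(text).sentiment.subjectivity

bio = "He is a brilliant, versatile performer who delivered a commanding lead role."
print(f"abstraction proxy: {abstraction_proxy(bio):.2f}, subjectivity: {subjectivity(bio):.2f}")

Comparing these scores across biographies grouped by the race and gender of the person described would give a crude version of the asymmetry the abstract reports.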


Related articles

An Examination of Age-Related Stereotypes and the Linguistic Intergroup Bias Using Two Measures

The linguistic intergroup bias is a phenomenon where people use more abstract language to talk positively about in-groups and negatively about out-groups (Maass, Salvi, Arcuri, & Semin, 1989). This has been established for many in-groups, but has not been extended to age-related stereotypes. This study extended the linguistic intergroup bias to attitudes towards older adults. It was predicted t...


Shirtless and Dangerous: Quantifying Linguistic Signals of Gender Bias in an Online Fiction Writing Community

Imagine a princess asleep in a castle, waiting for her prince to slay the dragon and rescue her. Tales like the famous Sleeping Beauty clearly divide up gender roles. But what about more modern stories, borne of a generation increasingly aware of social constructs like sexism and racism? Do these stories tend to reinforce gender stereotypes, or counter them? In this paper, we present a techniqu...


How do we communicate stereotypes? Linguistic bases and inferential consequences.

The linguistic expectancy bias is defined as the tendency to describe expectancy-consistent information at a higher level of abstraction than expectancy-inconsistent information. The communicative consequences of this bias were examined in 3 experiments. Analyses of judgments that recipients made on the basis of linguistically biased information generated by transmitters indicated that behavior...


Language use in intergroup contexts: the linguistic intergroup bias.

Three experiments examine how the type of language used to describe in-group and out-group behaviors contributes to the transmission and persistence of social stereotypes. Two experiments tested the hypothesis that people encode and communicate desirable in-group and undesirable out-group behaviors more abstractly than undesirable in-group and desirable out-group behaviors. Experiment 1 provide...


Quantifying and Reducing Stereotypes in Word Embeddings

Machine learning algorithms are optimized to model statistical properties of the training data. If the input data reflects stereotypes and biases of the broader society, then the output of the learning algorithm also captures these stereotypes. In this paper, we initiate the study of gender stereotypes in word embedding, a popular framework to represent text data. As their use becomes increasin...
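The abstract above concerns quantifying gender stereotypes captured by word embeddings. A common, minimal way to illustrate the idea, which is an assumption here and not necessarily that paper's exact method, is to project word vectors onto a he/she direction in a pretrained embedding; the gensim downloader and the GloVe model named below are illustrative choices.

# Minimal sketch: read the cosine similarity between an occupation vector and a
# he-minus-she direction as a crude gender-association score.
# Assumes gensim is installed and the pretrained GloVe model can be downloaded.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe embedding

def gender_score(word: str) -> float:
    """Cosine similarity with the he-she direction; >0 leans 'he', <0 leans 'she'."""
    direction = model["he"] - model["she"]
    v = model[word]
    return float(np.dot(v, direction) / (np.linalg.norm(v) * np.linalg.norm(direction)))

for w in ["doctor", "nurse", "engineer", "receptionist"]:
    print(f"{w:>14s}: {gender_score(w):+.3f}")

Systematic differences in these scores across occupation words are the kind of embedded stereotype that paper sets out to measure and reduce.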



Journal title:

Volume   Issue

Pages   -

Publication date: 2015